permit opt tokenizer #1958
Conversation
LGTM, see the one comment
20240ce to f8b834d
f8b834d to 6c44713
@bmosaicml aren't the OPT scores a bit low? From our internal data, I see OPT 1.3B piqa 0-shot (not 5-shot): 0.7236, vs:
Yes, they are a bit low... where are you getting your numbers from?
Blocking until the low numbers are resolved
@dakinggg @abhi-mosaic Sorry, the initial numbers I wrote were for OPT 125M. I just retested with OPT 1.3B and the numbers match what we expect!
Fantastic! re-approving :)
What does this PR do?
Adds functionality to support ICL with the OPT tokenizer, which prepends special tokens each time the tokenizer is called. Because we concatenate the context and continuation encodings into a single input, it is important that no special tokens end up between them.
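As a minimal sketch of the failure mode, the toy tokenizer below (hypothetical, standing in for the real OPT tokenizer's `add_special_tokens` behavior) shows how naively encoding context and continuation separately injects a stray BOS token mid-sequence, and how suppressing special tokens on the continuation fixes it:

```python
# Sketch (toy tokenizer, not the real OPT one) of why special tokens must
# not land between context and continuation. OPT-style tokenizers prepend
# a BOS token on every call, so encoding the two pieces separately and
# concatenating inserts a spurious BOS in the middle of the input.

BOS = 2  # OPT's bos_token_id

def toy_encode(text, add_special_tokens=True):
    # Stand-in for tokenizer(text)["input_ids"]: one fake id per word,
    # with BOS prepended when add_special_tokens is True (the default).
    ids = [hash(w) % 1000 + 10 for w in text.split()]
    return [BOS] + ids if add_special_tokens else ids

context, continuation = "The capital of France is", "Paris"

# Naive concatenation: a BOS ends up between context and continuation.
naive = toy_encode(context) + toy_encode(continuation)

# Correct: suppress special tokens on the continuation so the joined
# sequence contains exactly one BOS, at the front.
fixed = toy_encode(context) + toy_encode(continuation, add_special_tokens=False)

print(BOS in naive[1:])  # True  -> stray BOS inside the sequence
print(BOS in fixed[1:])  # False -> clean concatenation
```

The same idea applies with a real HuggingFace tokenizer by passing `add_special_tokens=False` when encoding the continuation.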
GPT-Neo 1.3B gets the following accuracy:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.717934787273407
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.5883721113204956
OPT 1.3B gets:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.719565212726593
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.5883721113204956
MosaicGPT 1.3B gets:
metrics/piqa/5-shot/InContextLearningMultipleChoiceAccuracy: 0.637499988079071
metrics/lambada/0-shot/InContextLearningLMAccuracy: 0.4075581431388855
What issue(s) does this change relate to?
ICL eval doesn't work with the OPT tokenizer, since it adds special tokens between contexts/continuations.
Before submitting
Did you run pre-commit on your change? (see the pre-commit section of prerequisites)